The human ability to recognize when an object is known or novel currently outperforms all open set recognition algorithms. Human perception, as measured by the methods and procedures of visual psychophysics, can provide an additional data stream for novelty detection in visual recognition tasks in computer vision. For instance, measured reaction time from human subjects can offer insight into whether a known class sample may be confused with a novel one. In this work, we designed and performed a large-scale behavioral experiment that collected over 200,000 human reaction time measurements associated with object recognition. The collected data indicated that reaction time varies meaningfully across objects at the sample level. We therefore designed a new psychophysical loss function that enforces consistency with human behavior in deep networks that exhibit variable reaction time for different images. As in biological vision, this approach allows us to achieve good open set recognition performance in regimes with limited labeled training data. Through experiments using data from ImageNet, significant improvement is observed when training Multi-Scale DenseNets with this new formulation: models trained with the loss function improve top-1 validation accuracy by 7%, top-1 test accuracy on known samples by 18%, and top-1 test accuracy on unknown samples by 33%. We compared our method to ten open set recognition methods from the literature, which it outperforms on multiple metrics.
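As an illustration only (the abstract does not give the exact formulation), a reaction-time-aligned loss might pair cross-entropy with a penalty that pushes a per-image difficulty proxy in the network (here, normalized prediction entropy) toward the normalized human reaction time for that image. The proxy choice and the `lam` weight below are assumptions, not the paper's method.

```python
import torch
import torch.nn.functional as F

def psychophysical_loss(logits, labels, human_rt, lam=0.1):
    """Hedged sketch: cross-entropy plus an alignment term so that images
    that slow humans down (high human_rt) also yield less confident
    network outputs (high normalized entropy).
    human_rt: reaction times normalized to [0, 1], shape (batch,)."""
    ce = F.cross_entropy(logits, labels)
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(-1)
    # Normalize entropy to [0, 1] by its maximum, log(num_classes).
    entropy = entropy / torch.log(torch.tensor(float(logits.shape[-1])))
    rt_align = F.mse_loss(entropy, human_rt)
    return ce + lam * rt_align
```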
Recently, convolutional neural network (CNN) techniques have gained popularity as a tool for hyperspectral image classification (HSIC). To improve the feature extraction efficiency of HSIC under the condition of limited samples, current methods generally use deep models with a large number of layers. However, deep network models are prone to overfitting and gradient vanishing problems when samples are limited. In addition, the spatial resolution decreases severely with depth, which is very detrimental to spatial edge feature extraction. Therefore, this letter proposes a shallow model for HSIC, called the depthwise over-parameterized convolutional neural network (DOCNN). To ensure effective feature extraction with the shallow model, the depthwise over-parameterized convolution (DO-Conv) kernel is introduced to extract discriminative features. The DO-Conv kernel is composed of a standard convolution kernel and a depthwise convolution kernel, which can extract the spatial features of different channels individually and fuse the spatial features of all channels simultaneously. Moreover, to further reduce the loss of spatial edge features caused by convolution operations, a dense residual connection (DRC) structure is proposed and applied to the feature extraction part of the whole network. Experimental results obtained on three benchmark datasets demonstrate that the proposed method outperforms other state-of-the-art methods in terms of both classification accuracy and computational efficiency.
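A minimal sketch of a DO-Conv layer, following the general recipe of depthwise over-parameterization: train a depthwise kernel D alongside a standard kernel W, then fold the two into a single conventional convolution kernel, so inference costs the same as a plain conv. Shapes and initialization here are illustrative, not the letter's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DOConv2d(nn.Module):
    """Sketch of depthwise over-parameterized convolution (DO-Conv)."""
    def __init__(self, in_ch, out_ch, k=3, padding=1):
        super().__init__()
        self.k, self.padding = k, padding
        d_mul = k * k  # depth multiplier; >= k*k in the DO-Conv formulation
        # Depthwise component D: one (d_mul x k*k) matrix per input channel.
        self.D = nn.Parameter(torch.randn(in_ch, d_mul, k * k) * 0.1)
        # Standard component W: (out_ch, in_ch, d_mul).
        self.W = nn.Parameter(torch.randn(out_ch, in_ch, d_mul) * 0.1)
        self.bias = nn.Parameter(torch.zeros(out_ch))

    def folded_kernel(self):
        # Compose D and W into one (out_ch, in_ch, k, k) kernel by a
        # per-input-channel contraction over the d_mul axis.
        DoW = torch.einsum('oid,idk->oik', self.W, self.D)
        return DoW.reshape(self.W.shape[0], self.D.shape[0], self.k, self.k)

    def forward(self, x):
        # During training both D and W receive gradients; at inference the
        # folded kernel can be cached, making this a plain convolution.
        return F.conv2d(x, self.folded_kernel(), self.bias, padding=self.padding)
```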
Autonomous vehicles are a growing technology that aims to enhance safety, accessibility, efficiency, and convenience through autonomous maneuvers ranging from lane changes to overtaking. Overtaking is one of the most challenging maneuvers for autonomous vehicles, and current techniques for autonomous overtaking are limited to simple situations. This paper studies how to increase safety in autonomous overtaking by allowing the maneuver to be aborted. We propose a decision process based on a deep Q-network to determine whether and when the overtaking maneuver needs to be aborted. The proposed algorithm is evaluated empirically in simulation with varying traffic situations, indicating that the proposed method improves safety during the overtaking maneuver. Furthermore, the approach is demonstrated in real-world experiments using the autonomous shuttle iseAuto.
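A minimal sketch of the decision component under stated assumptions: a small Q-network scores the two actions named in the abstract (continue or abort) from a hypothetical ego/traffic state vector, and the maneuver is aborted whenever the abort action has higher estimated value. The state features and network size are placeholders.

```python
import torch
import torch.nn as nn

# Hypothetical state: ego speed, lateral position, gap to oncoming
# vehicle, time-to-collision, maneuver phase, etc.
STATE_DIM = 8
ACTIONS = ["continue_overtake", "abort_overtake"]

class QNet(nn.Module):
    def __init__(self, state_dim=STATE_DIM, n_actions=len(ACTIONS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, s):
        return self.net(s)

def decide(qnet, state, epsilon=0.0):
    """Epsilon-greedy abort/continue decision, called at each control step."""
    if torch.rand(()) < epsilon:
        return torch.randint(len(ACTIONS), ()).item()
    with torch.no_grad():
        return qnet(state).argmax(-1).item()
```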
Training an ensemble of diverse sub-models has been empirically demonstrated to be an effective strategy for improving the adversarial robustness of deep neural networks. Current ensemble training methods for image recognition typically encode image labels as one-hot vectors, which neglects the dependency relationships among labels. Here we propose a novel adversarial ensemble training approach that jointly learns the conditional dependencies between labels and the models. We test our approach on the widely used datasets MNIST, FashionMNIST, and CIFAR-10. The results show that our approach is more robust against black-box attacks than state-of-the-art methods. Our code is available at https://github.com/zjlab-ammi/lsd.
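The abstract does not spell out the joint label-model dependency objective, so as context here is a minimal sketch of the baseline it builds on: adversarial training of an ensemble, with attacks crafted against the averaged logits. All hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def pgd_attack(models, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft adversarial examples against the ensemble's averaged logits (PGD)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        avg_logits = torch.stack([m(x_adv) for m in models]).mean(0)
        grad = torch.autograd.grad(F.cross_entropy(avg_logits, y), x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

def ensemble_adv_step(models, optimizer, x, y):
    """One training step: each sub-model learns to classify examples
    crafted against the whole ensemble."""
    x_adv = pgd_attack(models, x, y)
    optimizer.zero_grad()
    loss = sum(F.cross_entropy(m(x_adv), y) for m in models) / len(models)
    loss.backward()
    optimizer.step()
    return loss.item()
```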
Although nanorobots have already been used clinically for tasks such as gastroscopy, and photoacoustic tomography has even been proposed for steering nanorobots in real time to deliver drugs at designated delivery points, with reported cases of nanorobots navigating through the bloodstream, most of these techniques are immature: they suffer from low efficiency or low precision, or cannot be mass-produced. At this stage, the most effective treatments for cancer therefore remain chemotherapy and radiotherapy, which cause patients great suffering and cannot guarantee a cure. This paper therefore proposes an idealized model of a treatment method that could fully cure cancer: a collaborative treatment approach based on a nanorobot fleet, built on communication between team members and computer vision image classification (object detection).
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot benefit, or benefit only marginally, from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, and sequential distillation, revealing that: 1) distilling token relations is more effective than CLS-token- and feature-based distillation; 2) an intermediate layer of the teacher network performs better as the target than the last layer when the depth of the student mismatches that of the teacher; 3) weak regularization is preferred. With these findings, we achieve significant fine-tuning accuracy improvements over scratch MIM pre-training on ImageNet-1K classification, using ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models: exploring better training methods rather than introducing inductive biases into architectures, as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
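Finding 1) (distilling token relations) can be sketched as follows, assuming access to the per-head attention queries and keys of both networks; the temperature and the soft-cross-entropy form are illustrative rather than TinyMIM's exact loss.

```python
import torch
import torch.nn.functional as F

def relation_distill_loss(q_s, k_s, q_t, k_t, tau=1.0):
    """Hedged sketch of token-relation distillation: match the student's
    and teacher's token-to-token relation maps (scaled Q-K similarities)
    with a soft cross-entropy over tokens.
    q_*, k_*: (batch, heads, tokens, dim) attention queries/keys."""
    d = q_s.shape[-1]
    rel_s = (q_s @ k_s.transpose(-2, -1)) / d ** 0.5  # student relations
    rel_t = (q_t @ k_t.transpose(-2, -1)) / d ** 0.5  # teacher relations
    p_t = F.softmax(rel_t / tau, dim=-1)
    log_p_s = F.log_softmax(rel_s / tau, dim=-1)
    return -(p_t * log_p_s).sum(-1).mean()
```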
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes them computationally expensive. Therefore, it is difficult to deploy them onto edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness consideration. As a consequence, the student model usually inherits and even exaggerates the bias of the teacher GNN. To handle such a problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
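RELIANT's own proxy-based design is not specified in the abstract; the sketch below only illustrates the general shape of fairness-aware distillation that the problem statement implies: a standard KD objective plus a demographic-parity penalty on the student's predictions. The binary task, the penalty form, and all weights are assumptions.

```python
import torch
import torch.nn.functional as F

def fair_kd_loss(student_logits, teacher_logits, labels, sensitive,
                 tau=2.0, alpha=0.5, beta=1.0):
    """Hedged sketch of fairness-aware KD for node classification.
    sensitive: binary group membership per node, shape (N,)."""
    # Soft-label distillation from the teacher GNN.
    kd = F.kl_div(F.log_softmax(student_logits / tau, -1),
                  F.softmax(teacher_logits / tau, -1),
                  reduction='batchmean') * tau ** 2
    ce = F.cross_entropy(student_logits, labels)
    # Demographic-parity gap: difference in mean predicted probability of
    # the positive class between the two sensitive groups.
    p1 = F.softmax(student_logits, -1)[:, 1]
    dp_gap = (p1[sensitive == 1].mean() - p1[sensitive == 0].mean()).abs()
    return alpha * ce + (1 - alpha) * kd + beta * dp_gap
```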
This paper presents a machine learning approach to multidimensional item response theory (MIRT), a class of latent factor models that can be used to model and predict student performance from observed assessment data. Inspired by collaborative filtering, we define a general class of models that includes many MIRT models. We discuss the use of penalized joint maximum likelihood (JML) to estimate individual models and cross-validation to select the best performing model. This model evaluation process can be optimized using batching techniques, such that even sparse large-scale data can be analyzed efficiently. We illustrate our approach with simulated and real data, including an example from a massive open online course (MOOC). The high-dimensional model fit to this large and sparse dataset does not lend itself well to traditional methods of factor interpretation. By analogy to recommender-system applications, we propose an alternative "validation" of the factor model, using auxiliary information about the popularity of items consulted during an open-book exam in the course.
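For concreteness, one familiar member of this model class is the multidimensional two-parameter logistic model, estimated here (as a sketch) with a ridge-penalized JML objective; the paper's general class and its penalty need not match this exact form.

```latex
% Multidimensional 2PL: theta_i is student i's latent factor vector,
% a_j and d_j are item j's loadings and intercept.
\[
  P(x_{ij} = 1 \mid \theta_i) = \sigma\!\left(a_j^\top \theta_i + d_j\right),
  \qquad \sigma(z) = \frac{1}{1 + e^{-z}}
\]
% Penalized JML estimates all person and item parameters jointly over the
% observed (possibly sparse) responses; a ridge penalty is assumed here.
\[
  \min_{\Theta, A, d}\;
  -\sum_{(i,j)\ \text{observed}} \log P\left(x_{ij} \mid \theta_i, a_j, d_j\right)
  \;+\; \lambda \left( \lVert \Theta \rVert_F^2 + \lVert A \rVert_F^2 \right)
\]
```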
Research on automated essay scoring has become increasingly important because it serves as a method for evaluating students' written responses at scale. Scalable methods for scoring written responses are needed as students migrate to online learning environments, resulting in the need to evaluate large numbers of written-response assessments. The purpose of this study is to describe and evaluate three active learning methods that can be used to minimize the number of essays that must be scored by human raters while still providing the data needed to train a modern automated essay scoring system. The three active learning methods are the uncertainty-based, the topological-based, and the hybrid method. These three methods were used to select essays included as part of the Automated Student Assessment Prize competition, which were then classified using a scoring model that was trained with the Bidirectional Encoder Representations from Transformers (BERT) language model. All three active learning methods produced strong results, with the topological-based method producing the most efficient classification. Growth rate accuracy was also evaluated. The active learning methods produced different levels of efficiency under different sample size allocations but, overall, all three methods were highly efficient and produced classifications that were similar to one another.
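As an illustration of the uncertainty-based method (the simplest of the three), a least-confidence selection rule might look like the following; the budget and probability source are placeholders.

```python
import numpy as np

def uncertainty_sample(probs, budget):
    """Least-confidence sampling: pick the essays whose top predicted-score
    probability is lowest, i.e., where the scoring model is least certain.
    probs: (n_essays, n_score_levels) predicted probabilities.
    Returns indices of the essays to send to human raters."""
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:budget]
```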
Considering computation complexity, we propose a Guided Hybrid Quantization with One-to-one Self-Teaching (GHOST) framework. More concretely, we first design a structure called guided quantization self-distillation (GQSD), an innovative idea for achieving lightweight models through the synergy of quantization and distillation. The training process of the quantization model is guided by its full-precision counterpart, which saves time and cost without requiring a huge pre-trained model in advance. Second, we put forward a hybrid quantization (HQ) module to obtain the optimal bit width automatically under a constrained condition, where a threshold for the distribution distance between the center and samples is applied in the weight value search space. Third, in order to improve information transfer, we propose a one-to-one self-teaching (OST) module to give the student network an ability of self-judgment. A switch control machine (SCM) builds a bridge between the student network and the teacher network at the same location to help the teacher reduce wrong guidance and impart vital knowledge to the student. This distillation method allows a model to learn from itself and gain substantial improvement without any additional supervision. Extensive experiments on a multimodal dataset (VEDAI) and single-modality datasets (DOTA, NWPU, and DIOR) show that object detection based on GHOST outperforms existing detectors. The tiny model size (<9.7 MB) and low Bit-Operations (BOPs) (<2158 G) compared with any remote-sensing, lightweight, or distillation-based algorithm demonstrate its superiority in the lightweight design domain. Our code and model will be released at https://github.com/icey-zhang/GHOST.
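A minimal sketch of the GQSD idea under stated assumptions: fake-quantize the weights, then let the quantized model ("student") be guided by its own full-precision counterpart ("teacher") through a distillation loss. The bit width, temperature, and weighting are illustrative, and the HQ and OST modules are not modeled here.

```python
import torch
import torch.nn.functional as F

def fake_quantize(w, bits=4):
    """Uniform symmetric fake quantization of a weight tensor."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax
    return torch.round(w / scale).clamp(-qmax, qmax) * scale

def gqsd_loss(student_logits, teacher_logits, labels, tau=2.0, alpha=0.7):
    """Quantization self-distillation step: the quantized model mimics the
    soft outputs of its full-precision counterpart while also fitting labels."""
    kd = F.kl_div(F.log_softmax(student_logits / tau, -1),
                  F.softmax(teacher_logits / tau, -1),
                  reduction='batchmean') * tau ** 2
    return alpha * kd + (1 - alpha) * F.cross_entropy(student_logits, labels)
```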